Jeffreys' prior is asymptotically least favorable under entropy risk
Authors
Abstract
We provide a rigorous proof that Jeffreys' prior asymptotically maximizes Shannon's mutual information between a sample of size n and the parameter. This was conjectured by Bernardo (1979) and, despite the absence of a proof, forms the basis of the reference prior method in Bayesian statistical analysis. Our proof rests on an examination of large-sample decision-theoretic properties associated with the relative entropy, or Kullback-Leibler distance, between probability density functions for independent and identically distributed random variables. For smooth finite-dimensional parametric families we derive an asymptotic expression for the minimax risk and for the related maximin risk. As a result, we show that, among continuous positive priors, Jeffreys' prior uniquely achieves the asymptotic maximin value. In the discrete parameter case we show that, asymptotically, the Bayes risk reduces to the entropy of the prior, so that the reference prior is seen to be the maximum entropy prior. We identify the physical significance of the risks by giving two information-theoretic interpretations in terms of probabilistic coding.

AMS Subject Classification: Primary 62C10, 62C20; secondary 62F12, 62F15.
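As a small finite-n illustration of the maximin claim (our sketch, not from the paper), one can compute the mutual information I(Θ; X^n) for the Bernoulli family under symmetric Beta(a, a) priors and check numerically that the Jeffreys prior Beta(1/2, 1/2) is at or very near the maximum. The sample size, grid size, and the Beta family of candidate priors are arbitrary illustration choices.

```python
# Numerical sketch: mutual information I(Theta; X^n) for the Bernoulli
# family under symmetric Beta(a, a) priors.  The theory predicts the
# Jeffreys prior, Beta(1/2, 1/2), is near-maximal already for moderate n.
import numpy as np
from scipy.stats import beta, binom

n, G = 50, 2000                        # sample size, quadrature grid size
theta = (np.arange(G) + 0.5) / G       # midpoint rule avoids the endpoints

def mutual_information(a):
    w = beta.pdf(theta, a, a)          # prior weights on the grid
    w /= w.sum()
    k = np.arange(n + 1)
    p = binom.pmf(k[None, :], n, theta[:, None])   # P(k successes | theta_j)
    m = w @ p                                      # marginal P(k)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.where(p > 0, p * np.log(p / m), 0.0)
    return float(w @ integrand.sum(axis=1))        # I(Theta; X^n) in nats

for a in [0.25, 0.5, 1.0, 2.0]:
    print(f"Beta({a},{a}): I = {mutual_information(a):.4f} nats")
# Beta(1/2,1/2) (Jeffreys) should come out on top or very near it.
```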
Similar references
Asymptotically minimax Bayes predictive densities
The Kullback-Leibler loss ∫ fθ log(fθ/f̂) is used to examine various ways of choosing prior distributions; the principal type of choice studied is minimax. We seek asymptotically least favorable predictive densities for which the corresponding asymptotic risk is minimax. A result resembling Stein's paradox for estimating normal means by maximum likelihood holds for the uniform prior in the multivariate location family...
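A minimal sketch of the kind of risk computation involved (ours, not the paper's): in the normal location family N(mu, 1), the Bayes predictive density under the uniform prior is N(xbar, 1 + 1/n), and its Kullback-Leibler risk is constant in mu, equal to (1/2) log(1 + 1/n). The values of n and mu below are arbitrary.

```python
# Monte Carlo check that the uniform-prior Bayes predictive density
# N(xbar, 1 + 1/n) has constant KL risk (1/2) * log(1 + 1/n).
import numpy as np

rng = np.random.default_rng(0)
n, mu, reps = 20, 1.3, 200_000          # arbitrary illustration values

def kl_normal(m1, v1, m2, v2):
    """D( N(m1, v1) || N(m2, v2) ) in nats; vectorized over means."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
risk = kl_normal(mu, 1.0, xbar, 1.0 + 1.0 / n).mean()
print(f"Monte Carlo KL risk: {risk:.5f}")
print(f"(1/2) log(1 + 1/n) : {0.5 * np.log(1 + 1 / n):.5f}")
```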
Full text
Minimax redundancy for the class of memoryless sources
Let X^n = (X_1, ..., X_n) be a memoryless source with unknown distribution on a finite alphabet of size k. We identify the asymptotic minimax coding redundancy for this class of sources, and provide a sequence of asymptotically minimax codes. Equivalently, we determine the limiting behavior of the minimax relative entropy min_Q max_P D(P_{X^n} || Q_{X^n}), where the maximum is over all independent and identically...
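To make the minimax redundancy concrete (our illustration, with alphabet size k = 2 and natural logarithms), the expected redundancy of the Jeffreys-mixture (Krichevsky-Trofimov) code for a Bernoulli(p) source can be computed exactly and compared with the Clarke-Barron asymptotic minimax value (1/2) log(n/(2πe)) + log π; near-constancy across p is the least-favorable, equalizer property.

```python
# Exact expected redundancy of the Jeffreys (Krichevsky-Trofimov) mixture
# for a Bernoulli(p) source, versus the asymptotic minimax value.
import numpy as np
from scipy.special import betaln, gammaln

n = 500
k = np.arange(n + 1)
log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
# Jeffreys-mixture marginal: m(k) = C(n,k) B(k+1/2, n-k+1/2) / B(1/2, 1/2)
log_m = log_binom + betaln(k + 0.5, n - k + 0.5) - betaln(0.5, 0.5)

def redundancy(p):
    log_p = log_binom + k * np.log(p) + (n - k) * np.log(1 - p)
    pmf = np.exp(log_p)
    return float(pmf @ (log_p - log_m))   # E_p[ log p(X^n) / m(X^n) ]

asymptote = 0.5 * np.log(n / (2 * np.pi * np.e)) + np.log(np.pi)
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"p={p}: redundancy = {redundancy(p):.4f} nats"
          f" (asymptote {asymptote:.4f})")
```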
Full text
Bayesian Estimation of Shift Point in Shape Parameter of Inverse Gaussian Distribution Under Different Loss Functions
In this paper, a Bayesian approach is proposed for shift-point detection in an inverse Gaussian distribution. In this study, the mean parameter of the inverse Gaussian distribution is assumed to be constant and a shift point in the shape parameter is considered. First, the posterior distribution of the shape parameter is obtained. Then the Bayes estimators are derived under a class of priors and using variou...
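A minimal sketch of one way such a shift-point posterior can be computed (our assumptions, not the paper's exact priors or loss functions): with the mean mu known, the inverse Gaussian likelihood is conjugate to a Gamma prior on the shape lambda, so the marginal likelihood of each segment is closed-form, and a uniform prior on the shift index m gives the grid posterior below. All parameter values are arbitrary.

```python
# Posterior over the shift point m when X_1..X_n are inverse Gaussian with
# known mean mu and shape lambda that changes from lam1 to lam2 after m.
# Conjugacy: the likelihood in lambda is Gamma-shaped, so with Gamma(a, b)
# priors (rate b) each segment's marginal likelihood is closed-form; the
# x-dependent factor prod (2*pi*x^3)^(-1/2) is the same for every m and cancels.
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
mu, lam1, lam2, n, m_true = 2.0, 6.0, 1.5, 80, 45
x = np.concatenate([rng.wald(mu, lam1, m_true), rng.wald(mu, lam2, n - m_true)])

a, b = 1.0, 1.0                          # Gamma(shape=a, rate=b) prior on lambda
s = (x - mu) ** 2 / (mu ** 2 * x)        # per-observation sufficient statistic

def log_seg(S, k):
    """Log marginal likelihood of a segment with k points, sum of s = S."""
    return lgamma(a + k / 2) - (a + k / 2) * np.log(b + S / 2)

cums = np.concatenate([[0.0], np.cumsum(s)])
ms = np.arange(1, n)                     # shift occurs after observation m
logpost = np.array([log_seg(cums[m], m) + log_seg(cums[n] - cums[m], n - m)
                    for m in ms])
post = np.exp(logpost - logpost.max())
post /= post.sum()
print("posterior mode of shift point:", ms[post.argmax()], "(true:", m_true, ")")
```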
Full text
Asymptotically minimax regret for exponential families
We study the problem of data compression, gambling, and prediction of a sequence x = x_1 x_2 ... x_n from a certain alphabet X, in terms of regret and redundancy with respect to a general exponential family. In particular, we evaluate the regret of the Bayes mixture density and show that it asymptotically achieves the minimax value when variants of the Jeffreys prior are used. Keywords: universal codi...
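For a concrete sense of regret versus its minimax value (our sketch, Bernoulli case, natural logs): the exact minimax regret is the log Shtarkov sum, and the plain Jeffreys mixture's worst-case regret sits slightly above it (by about (1/2) log 2, attained at the boundary sequences), which is one reason variants of the Jeffreys prior are needed.

```python
# Exact minimax regret (log Shtarkov sum) for Bernoulli sequences of
# length n, versus the worst-case regret of the plain Jeffreys mixture.
import numpy as np
from scipy.special import betaln, gammaln, logsumexp

n = 200
k = np.arange(n + 1)
log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
with np.errstate(divide="ignore", invalid="ignore"):
    # maximized log-likelihood of a sequence with k ones: k log(k/n) + ...
    log_ml = np.where(k == 0, 0.0, k * np.log(k / n)) + \
             np.where(k == n, 0.0, (n - k) * np.log(1 - k / n))
# minimax regret = log of the sum over sequences of the maximized likelihood
shtarkov = logsumexp(log_binom + log_ml)
# per-sequence Jeffreys-mixture probability for a sequence with k ones
log_jeff = betaln(k + 0.5, n - k + 0.5) - betaln(0.5, 0.5)
jeffreys_worst = np.max(log_ml - log_jeff)
print(f"minimax regret (Shtarkov) : {shtarkov:.4f} nats")
print(f"Jeffreys worst-case regret: {jeffreys_worst:.4f} nats")
```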
Full text
Asymptotically Constant-Risk Predictive Densities When the Distributions of Data and Target Variables Are Different
We investigate the asymptotic construction of constant-risk Bayesian predictive densities under the Kullback–Leibler risk when the distributions of data and target variables are different and have a common unknown parameter. It is known that the Kullback–Leibler risk is asymptotically equal to a trace of the product of two matrices: the inverse of the Fisher information matrix for the data and ...
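A hedged numerical check of that trace formula (our example, not the paper's): take data X_i ~ N(theta, v_x) and a target Y ~ N(theta, v_y) sharing the unknown theta; then for the plug-in density f(y | thetahat), with thetahat the MLE, the Kullback-Leibler risk behaves like tr(I_target(theta) I_data(theta)^{-1})/(2n) = v_x/(2 n v_y). All numerical values below are arbitrary.

```python
# Monte Carlo check of the trace formula for the plug-in predictive density
# when the data and target distributions differ but share the parameter.
import numpy as np

rng = np.random.default_rng(7)
theta, vx, vy, n, reps = 0.4, 2.0, 0.5, 50, 200_000

thetahat = rng.normal(theta, np.sqrt(vx), size=(reps, n)).mean(axis=1)
# KL( N(theta, vy) || N(thetahat, vy) ) = (theta - thetahat)^2 / (2 vy)
risk = ((theta - thetahat) ** 2 / (2 * vy)).mean()
print(f"Monte Carlo risk : {risk:.5f}")
print(f"trace formula    : {vx / (2 * n * vy):.5f}")
```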
Full text